covert channel
Stealing AI Model Weights Through Covert Communication Channels
Barbaza, Valentin, Diaz-Rizo, Alan Rodrigo, Aboushady, Hassan, Raptis, Spyridon, Stratigopoulos, Haralampos-G.
Sorbonne Université, CNRS, LIP6, Paris, France

Abstract--AI models are often regarded as valuable intellectual property due to the high cost of their development, the competitive advantage they provide, and the proprietary techniques involved in their creation. As a result, AI model stealing attacks pose a serious concern for AI model providers. In this work, we present a novel attack targeting wireless devices equipped with AI hardware accelerators. The attack unfolds in two phases. In the first phase, the victim's device is compromised with a hardware Trojan (HT) designed to covertly leak model weights through a hidden communication channel, without the victim realizing it. In the second phase, the adversary uses a nearby wireless device to intercept the victim's transmission frames during normal operation and incrementally reconstruct the complete weight matrix. The proposed attack is agnostic to both the AI model architecture and the hardware accelerator used. Additionally, we analyze the impact of bit error rates on the reception and propose an error mitigation technique. The effectiveness of the attack is evaluated based on the accuracy of the reconstructed models with stolen weights and the time required to extract them. Finally, we explore potential defense mechanisms.

I. Introduction

AI models are regarded as valuable assets because their development demands significant investment in data collection, computational resources, and training time. They also offer a competitive edge, as model performance frequently distinguishes companies in the same industry. Furthermore, these models embody proprietary insights, including specialized feature engineering, architectural decisions, and unique training methodologies.
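The receiver-side reconstruction and error mitigation described above can be sketched abstractly. This is a hypothetical illustration, not the paper's actual protocol: it assumes each covert frame carries an (index, 32-bit weight) pair, that every weight is leaked several times, and that the receiver mitigates bit errors with a per-bit majority vote across the repeated observations.

```python
import struct
from collections import defaultdict

def frames_from_weights(weights, repeats=3):
    """Hypothetical HT-side encoder: each covert frame carries an
    (index, big-endian float32) pair; every weight is leaked `repeats`
    times so the receiver can vote out channel bit errors."""
    frames = []
    for _ in range(repeats):
        for i, w in enumerate(weights):
            frames.append((i, struct.pack(">f", w)))
    return frames

def reconstruct(frames, n_weights):
    """Receiver side: group observations by weight index, then take a
    per-bit majority vote across repeats before decoding the float."""
    obs = defaultdict(list)
    for i, payload in frames:
        obs[i].append(int.from_bytes(payload, "big"))
    weights = [0.0] * n_weights
    for i, vals in obs.items():
        bits = 0
        for b in range(32):
            ones = sum((v >> b) & 1 for v in vals)
            if ones * 2 > len(vals):  # majority of observations say 1
                bits |= 1 << b
        weights[i] = struct.unpack(">f", bits.to_bytes(4, "big"))[0]
    return weights
```

With three repeats, any single-frame bit flip is outvoted, so the weight matrix is recovered bit-exactly; higher bit error rates would call for more repeats or a proper error-correcting code.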
MeMoir: A Software-Driven Covert Channel based on Memory Usage
Gonzalez-Gomez, Jeferson, Ibarra-Campos, Jose Alejandro, Sandoval-Morales, Jesus Yamir, Bauer, Lars, Henkel, Jörg
Covert channel attacks have been continuously studied as severe threats to modern computing systems. Software-based covert channels are a typically hard-to-detect branch of these attacks, since they leverage virtual resources to establish illegitimate communication between malicious actors. In this work, we present MeMoir: a novel software-driven covert channel that, for the first time, utilizes memory usage as the medium for the channel. We implemented the new covert channel on two real-world platforms with different architectures: a general-purpose Intel x86-64-based desktop computer and an ARM64-based embedded system. Our results show that our new architecture- and hardware-agnostic covert channel is effective and achieves moderate transmission rates with very low error. Moreover, we present a real use-case for our attack where we were able to communicate information from a Hyper-V virtualized environment to a Windows 11 host system. In addition, we implement a machine learning-based detector that can predict whether an attack is present in the system with an accuracy of more than 95% with low false positive and false negative rates by monitoring the use of system memory. Finally, we introduce a noise-based countermeasure that effectively mitigates the attack while inducing a low power overhead in the system compared to other normal applications.
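The core idea of a memory-usage covert channel can be illustrated with a simple simulation. This sketch is not MeMoir's implementation: it assumes a slotted scheme in which the sender holds system memory usage high during a slot to signal a 1 and idle to signal a 0, and the receiver averages usage samples per slot and applies a threshold; the usage trace here is simulated rather than measured from a real OS.

```python
def send(bits, slot_samples=4, base_mb=100, spike_mb=50):
    """Simulated sender: per time slot, elevated memory usage encodes
    a 1 bit, baseline usage encodes a 0 bit. Returns the usage trace
    (one sample per sampling interval) that a receiver would observe."""
    trace = []
    for b in bits:
        level = base_mb + (spike_mb if b else 0)
        trace.extend([level] * slot_samples)
    return trace

def receive(trace, slot_samples=4, threshold_mb=125):
    """Receiver: average the usage samples within each slot and
    threshold the mean to recover the transmitted bit."""
    bits = []
    for k in range(0, len(trace), slot_samples):
        slot = trace[k:k + slot_samples]
        bits.append(1 if sum(slot) / len(slot) > threshold_mb else 0)
    return bits
```

In a real attack the receiver would sample a shared metric such as the OS-reported free memory, and slot timing plus thresholding would have to tolerate the noise of concurrent applications, which is exactly what makes such channels' transmission rates moderate.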
Covert Channel Attack to Federated Learning Systems
Costa, Gabriele, Pinelli, Fabio, Soderi, Simone, Tolomei, Gabriele
Federated learning (FL) goes beyond traditional, centralized machine learning by distributing model training among a large collection of edge clients. These clients cooperatively train a global, e.g., cloud-hosted, model without disclosing their local, private training data. The global model is then shared among all the participants which use it for local predictions. In this paper, we put forward a novel attacker model aiming at turning FL systems into covert channels to implement a stealth communication infrastructure. The main intuition is that, during federated training, a malicious sender can poison the global model by submitting purposely crafted examples. Although the effect of the model poisoning is negligible to other participants, and does not alter the overall model performance, it can be observed by a malicious receiver and used to transmit a single bit.
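The single-bit transmission described above can be sketched in miniature. This is a hypothetical abstraction, not the authors' construction: it assumes updates are plain weight vectors aggregated by FedAvg-style averaging, that sender and receiver share a secret coordinate index, and that the sender encodes the bit in the sign of a small perturbation at that coordinate.

```python
def sender_update(local_update, bit, trigger_idx, strength=0.05):
    """Malicious sender: add a small signed bias at the shared secret
    coordinate. The perturbation is tiny relative to the model, so
    overall performance is essentially unchanged for other clients."""
    u = list(local_update)
    u[trigger_idx] += strength if bit else -strength
    return u

def aggregate(updates):
    """FedAvg-style aggregation: coordinate-wise mean of client updates."""
    n = len(updates)
    return [sum(vals) / n for vals in zip(*updates)]

def receiver_decode(global_update, trigger_idx):
    """Malicious receiver: read the bit off the sign of the secret
    coordinate in the shared global update."""
    return 1 if global_update[trigger_idx] > 0 else 0
```

The channel works because averaging preserves the sender's bias at the trigger coordinate as long as the honest clients' contributions there are small and roughly zero-mean; in practice the encoding must survive many honest updates, which bounds the achievable covert bit rate.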
Meltdown
Moritz Lipp is a Ph.D. candidate at Graz University of Technology, Graz, Austria. Michael Schwarz is a postdoctoral researcher at Graz University of Technology, Graz, Austria. Daniel Gruss is an assistant professor at Graz University of Technology, Graz, Austria. Thomas Prescher is a chief architect at Cyberus Technology GmbH, Dresden, Germany. Werner Haas is the Chief Technology Officer at Cyberus Technology GmbH, Dresden, Germany.